Sort by: 7,966 results in total (search time: 18 ms)
101.
Obtaining the depth of closure (DoC) accurately is a fundamental problem in coastal engineering, since the success of coastal structures and beach nourishment depends largely on the DoC. Two methods are currently used to obtain it: mathematical formulations and profile surveys. Both can incur substantial errors if the characteristics and morphology of the area are not taken into account, or if a sufficiently long time series is not available. In this work the DoC is obtained from a break in the trend of sediment size with depth: in general, sediment size decreases as depth increases, but at one point this tendency reverses and the size increases before decreasing again. Taking the DoC at the relative minimum of sediment size just before this increase, the error incurred is small compared with other methods: if the Standard Deviation of Depth Change (SDDC) method is taken as the most accurate reference, the error of the proposed method is less than 7%. In addition, the DoC given by the sediment method always lies outside the zone of bar movement, whereas profile-survey methods (using 2 cm precision profiles) sometimes place the DoC within the active zone of bar movement. At the relative minimum of the median sediment size, grain sizes of 0.063 and 0.125 mm predominate in the sample composition. The new method therefore locates the DoC quickly and simply, and has the further advantage of being unaffected by modifications to the study area or to the cross-shore beach profile.
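The trend-break criterion of item 101 can be sketched as a search for the relative minimum of the median grain size (D50) along the seaward-ordered profile. The function name and the synthetic profile below are illustrative, not from the paper.

```python
def doc_from_sediment_trend(depths, d50):
    """Return the depth of the first relative minimum of D50 along the
    seaward-ordered profile, i.e. the proposed DoC indicator."""
    for i in range(1, len(d50) - 1):
        if d50[i] < d50[i - 1] and d50[i] < d50[i + 1]:
            return depths[i]
    return None  # no trend break found

# Synthetic cross-shore profile: D50 decreases seaward, rises, then
# decreases again; the trend break sits at 8 m depth.
depths = [2, 4, 6, 8, 10, 12, 14]                  # m below mean sea level
d50 = [0.30, 0.22, 0.15, 0.09, 0.14, 0.12, 0.10]   # mm

print(doc_from_sediment_trend(depths, d50))  # → 8
```

In practice the profile would come from sediment samples at surveyed depths; the scan only needs one pass over the sorted profile, which is what makes the method fast and simple.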
102.
A minimum 1-D seismic velocity model for routine seismic event location was determined for the western Barents Sea using a modified version of the VELEST code. The resulting model, BARENTS16, and the corresponding station corrections were produced using data from stations at regional distances, the vast majority located on the periphery of the recorded seismic activity owing to the unfavorable land-sea distribution. Recorded seismicity is approached through a joint bulletin, produced by merging several international and regional bulletins for the region together with additional parametric data from temporary deployments. We discuss the challenges posed by this extreme network-seismicity geometry for velocity estimation resolution and result stability. Although the conditions do not allow the estimation of meaningful station corrections at the farthermost stations, and even well-resolved corrections do not contribute convincingly, we show that the process can still converge to a stable average velocity model for the crust and upper mantle, in good agreement with a priori information about the regional structure and geology, which adequately reduces errors in event location estimates.
103.
Stochastic Environmental Research and Risk Assessment - The joint action of wind-driven rain and wind pressure is the main cause of water penetration in building facades, which causes various...
104.
Flood mitigation should address the most sensitive flooding elements in order to release risk efficiently and reduce losses. At present, flood control is mainly concerned with peak level or peak discharge, which, however, are not always the most sensitive flooding elements. In fact, under human activity and climate change, flood threats appear not only in peak level and peak discharge but also in other elements such as the maximum 24-h volume and the maximum 72-h volume. In this paper, six key flood intensity indices (elements) are collected, and a sensitivity analysis model based on the catastrophe progression approach is developed to identify the indices that most strongly control flood intensity. Index sensitivity is determined through a case study of the Wujiang River, South China, based on half a century of flow records. The results indicate that there is no evident interplay among the index sensitivities, but the variability of an index's sensitivity is closely related to the variability of the index itself, and sensitivity increases as the index value decreases. Peak discharge is found not to be the most influential flood factor in this case, contrary to what is generally assumed. The maximum 24-h volume has the greatest sensitivity of all the indices, indicating that it plays the leading role in the flood threat of the Wujiang River. It is inferred that, for flood warning and mitigation purposes, peak discharge is not always the most sensitive and dominant index; which index dominates depends on its sensitivity.
105.
The variogram is a key input for geostatistical estimation and simulation. Preferential sampling may bias the spatial structure and often leads to noisy and unreliable variograms. A novel technique is proposed to weight variogram pairs in order to compensate for preferential or clustered sampling. Weighting the variogram pairs by global kriging of the squared differences between the tail and head values gives each pair an appropriate weight, removes noise and minimizes artifacts in the experimental variogram. Moreover, variogram uncertainty can be computed with this technique. The covariance required between the pairs entering the variogram calculation is a fourth-order covariance that must be calculated from second-order moments. This introduces some circularity: an initial variogram must be assumed before calculating how the pairs should be weighted for the experimental variogram. The methodology is assessed with synthetic and realistic examples. In the synthetic example, a comparison between the traditional and declustered variograms shows that the declustered variograms are better estimates of the true underlying variograms. The realistic example likewise shows that the declustered sample variogram is closer to the true variogram.
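The pair-weighting idea in item 105 can be illustrated with a simplified stand-in: instead of the paper's kriging-derived pair weights, the sketch below weights each pair by the product of per-sample declustering weights. All names and data are illustrative.

```python
def weighted_variogram(x, z, lags, tol, w):
    """Experimental semivariogram with per-sample declustering weights.

    Each pair (i, j) whose separation falls within tol of lag h
    contributes 0.5 * (z[i] - z[j])**2 with weight w[i] * w[j].
    Note: the paper derives pair weights by global kriging of the
    squared tail-head differences; this product form is a simplification.
    """
    gamma = []
    for h in lags:
        num = den = 0.0
        for i in range(len(x)):
            for j in range(i + 1, len(x)):
                if abs(abs(x[i] - x[j]) - h) <= tol:
                    wij = w[i] * w[j]
                    num += wij * 0.5 * (z[i] - z[j]) ** 2
                    den += wij
        gamma.append(num / den if den > 0 else float("nan"))
    return gamma

# A regular 1-D transect with equal weights reproduces the classical
# experimental variogram:
print(weighted_variogram([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 0.0, 1.0],
                         [1.0, 2.0], 0.1, [1.0] * 4))  # → [0.5, 0.0]
```

With equal weights the estimator reduces to the traditional Matheron variogram; downweighting clustered samples changes only the pair weights, not the estimator's form.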
106.
Winds from the north-west quadrant and a lack of precipitation are known to increase PM10 concentrations over a residential neighborhood of the city of Taranto (Italy). In 2012 the local government prescribed a 10% reduction of industrial emissions every time such meteorological conditions are forecast 72 h in advance. Wind forecasting is carried out by the Regional Environmental Protection Agency using the Weather Research and Forecasting (WRF) atmospheric simulation system. In the context of distributions-oriented forecast verification, we propose a comprehensive model-based inferential approach to investigate the ability of the WRF system to forecast local wind speed and direction, allowing different performances under unknown weather regimes. Ground-observed and WRF-forecast wind speed and direction at a relevant location are jointly modeled as a 4-dimensional time series with an unknown finite number of states characterized by homogeneous distributional behavior. The proposed model relies on a mixture of joint projected and skew normal distributions with time-dependent states, where the temporal evolution of state membership follows a first-order Markov process. Parameter estimates, including the number of states, are obtained by a Bayesian MCMC-based method. The results provide useful insights into the performance of WRF forecasts for different combinations of wind speed and direction.
107.
The increasing importance of understanding the structure of the Air Pollution Index (API) makes it necessary to develop a compositional representation of the API based on its pollutants. Such a representation is more comprehensible for the public and makes it easier to cooperate with the authorities in reducing the causes of air pollution. Since five pollutants contribute to determining the API value, the API can be treated as compositional data. This study is based on API and pollutant data collected in the city of Klang, Malaysia, from January 2005 to December 2014. The proportion of each pollutant in the API is treated as a component, giving a five-component compositional API. Zero components for pollutants that have no effect on the API are a serious problem that prevents the application of log-ratio transformations. Amalgamation is therefore used to combine components containing zeros, reducing the number of zeros, and a multiplicative replacement is applied to replace the remaining zero components with a small value while preserving the ratios of the nonzero components. The compositional data are transformed to log-ratio coordinates using the additive log-ratio transformation, and the transformed series is modeled with a VAR model. Four criteria are used to determine the lag order p of the VAR(p): the Akaike information, Schwartz, Hannan-Quinn and final prediction error criteria. A VAR(1) model with no constant or trend is selected as the best-fitting model and is used to forecast 12 months ahead. API values are mainly determined by PM10, whose proportion is close to one most of the time during the study period. Authorities and researchers therefore need to study the sources of PM10 and provide the public with useful information and alternatives for reducing air pollution.
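The zero-handling and transformation steps of item 107 can be sketched directly: multiplicative replacement swaps zeros for a small value while rescaling the nonzero parts (preserving their ratios), and the additive log-ratio (alr) transform then maps the composition into unconstrained coordinates for VAR modeling. The delta value and the example composition are illustrative.

```python
import math

def multiplicative_replacement(comp, delta=0.005):
    """Replace zero parts with delta and rescale nonzero parts so the
    composition still sums to 1; ratios of nonzero parts are preserved."""
    n_zero = sum(1 for c in comp if c == 0)
    return [delta if c == 0 else c * (1 - n_zero * delta) for c in comp]

def alr(comp):
    """Additive log-ratio transform using the last part as reference."""
    ref = comp[-1]
    return [math.log(c / ref) for c in comp[:-1]]

# Five-part composition (pollutant proportions of the API, illustrative):
x = [0.70, 0.15, 0.10, 0.05, 0.0]
x = multiplicative_replacement(x)   # zeros removed, still sums to 1
y = alr(x)                          # 4 unconstrained coordinates
print(len(y))
```

The alr coordinates can then be fed to any multivariate time-series model; back-transforming forecasts recovers proportions that sum to one.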
108.
Water resources provide the foundation for human development and environmental sustainability. Water shortage occurs to varying degrees in some regions, and usually causes sluggish economic activity, degraded ecology, and even conflicts and disputes among water-use sectors. Game theory can reflect the behavior of the stakeholders involved and has been increasingly employed in water resources management. This paper presents a framework for allocating river basin water cooperatively. The proposed framework applies a TOPSIS model combined with entropy weighting to determine stakeholders' initial water shares, then reallocates water and net benefit using four solution concepts for crisp and fuzzy games. Finally, a fallback bargaining model is employed to reach unanimous agreement over the four solution concepts. The framework is demonstrated with an application to the Dongjiang River Basin, South China. The results show that, overall, the whole basin gains more total benefit when the players participate in fuzzy coalitions rather than in crisp coalitions, and that \(\{NHS_{Fuzzy}, SV_{Crisp}\}\) better distributes the total benefit of the whole basin among the players. This study tested the effectiveness of the framework for water allocation decision-making in river basins. The results provide technical support for water-rights trading among the stakeholders at the basin scale and have the potential to relieve water-use conflicts across the entire basin.
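The first stage of item 108, entropy weighting followed by TOPSIS ranking, is standard enough to sketch. The decision matrix below is a toy example, not data from the Dongjiang basin, and all criteria are treated as benefit criteria for simplicity.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weights for a decision matrix X (rows = alternatives,
    columns = criteria); assumes strictly positive entries."""
    P = X / X.sum(axis=0)
    E = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])
    d = 1.0 - E                    # degree of diversification per criterion
    return d / d.sum()

def topsis(X, w):
    """TOPSIS closeness coefficients (higher = closer to the ideal),
    treating every criterion as a benefit criterion."""
    V = w * X / np.sqrt((X ** 2).sum(axis=0))   # weighted normalized matrix
    best, worst = V.max(axis=0), V.min(axis=0)
    d_pos = np.sqrt(((V - best) ** 2).sum(axis=1))
    d_neg = np.sqrt(((V - worst) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)

# A symmetric 2x2 matrix yields equal weights and equal closeness:
X = np.array([[1.0, 2.0], [2.0, 1.0]])
w = entropy_weights(X)
print(w, topsis(X, w))
```

In the paper's framework the closeness coefficients would be normalized into initial water shares before the game-theoretic reallocation step.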
109.
The identification and accurate quantification of sources and sinks of greenhouse gases (GHG) has become a key challenge for scientists and policymakers working on climate change. Creating a hydropower reservoir by damming a river for power generation converts terrestrial ecosystems into aquatic ones; the subsequent aerobic and anaerobic decomposition of flooded terrestrial soil organic matter emits a significant quantity of GHG to the atmosphere. Tropical and subtropical hydropower reservoirs are more significant GHG sources than boreal or temperate ones. This paper estimates the emission factor (gCO2eq./kWh) and net GHG emission of the Koteshwar hydropower reservoir in Uttarakhand, India. The estimates are then compared with those of global reservoirs located in the same eco-region so that the reservoir's impact can be minimized or mitigated in time. The emission factor and net GHG emission of the Koteshwar reservoir are estimated at 13.87 gCO2eq./kWh and 167.70 Gg C year-1 respectively, which are lower than those of other global reservoirs in the same eco-region. This information could help the hydropower industry in constructing reservoirs in tropical eco-regions.
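The emission factor in item 109 is, by definition, the annual net GHG emission divided by the annual electricity output. The sketch below shows the arithmetic only; the generation figure is purely hypothetical, and only the 13.87 gCO2eq./kWh value appears in the abstract.

```python
def emission_factor(net_emission_g_co2eq, generation_kwh):
    """Emission factor in gCO2eq. per kWh: annual net GHG emission
    (already converted to grams CO2-equivalent) over annual generation."""
    return net_emission_g_co2eq / generation_kwh

# Hypothetical numbers chosen to reproduce the reported factor:
print(emission_factor(1.387e12, 1.0e11))  # → 13.87
```

Converting the reported net emission (in Gg C year-1) into gCO2eq. requires the CO2/C mass ratio (44/12) plus CH4 global-warming-potential terms, which the abstract does not break down, so no such conversion is attempted here.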
110.
The estimation of long-term sea level variability is of primary importance for climate change assessment. Despite the importance of the subject, no scientific consensus has yet been reached on whether an acceleration exists in the observed values. The existence of this acceleration is crucial for coastal protection planning; its absence would intensify the debate on the general validity of current future projections. Methodologically, the evaluation of the acceleration remains a controversial and open discussion, documented in a number of review articles that describe the state of the art of sea level research. In the present paper, the well-proven direct scaling analysis approach is applied to describe the long-term sea level variability at 12 tide gauge stations selected worldwide. For each station, the long-term sea level variability is shown to exhibit a trimodal scaling behaviour that can be modelled by a power law with three different pairs of shape and scale parameters. Compared with alternative methods in the literature, which take multiple correlated factors into account, this simple method reduces the uncertainties in the estimation of the sea level rise parameters.
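The core operation behind the scaling analysis of item 110, fitting a power law to obtain shape and scale parameters, reduces to linear regression in log-log space; for the trimodal behaviour described above it would be applied separately to each of the three scaling regimes. The data below are synthetic.

```python
import math

def fit_power_law(x, y):
    """Least-squares fit of y = a * x**b in log-log space.

    Returns (a, b): the scale parameter a and the shape (scaling)
    exponent b. Assumes strictly positive x and y.
    """
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)
    return a, b

# An exact power law recovers its own parameters:
x = [1, 2, 4, 8, 16]
y = [2 * v ** 0.5 for v in x]
a, b = fit_power_law(x, y)
print(round(a, 3), round(b, 3))  # → 2.0 0.5
```

For a trimodal fit, the time-scale axis would first be split at the two break points between regimes and this routine run on each segment, yielding the three pairs of parameters the abstract mentions.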

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号